This new version of the MCMC2 program for modeling the thermodynamic and structural properties of multiply-charged clusters by means of parallel classical Monte Carlo methods provides some enhancements and corrections to the earlier version [1]. In particular, histograms for negatively and positively charged particles are now separated, parallel Monte Carlo simulations can be performed by attempting exchanges between all replica pairs rather than a single randomly chosen pair, a new random number generator is supplied, and the contribution of the Coulomb repulsion to the total heat capacity is corrected. The main functionalities of the original MCMC2 code (e.g., potential-energy surfaces and Monte Carlo algorithms) have not been modified.

New version program summary

Program title: MCMC2
Catalogue identifier: AENZ_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENZ_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 148028
No. of bytes in distributed program, including test data, etc.: 1501936
Distribution format: tar.gz
Programming language: Fortran 90 with MPI extensions for parallelization
Computers: x86 and IBM platforms
Operating systems:
1. CentOS 5.6, Intel Xeon X5670 2.93 GHz, gfortran/ifort (version 13.1.0) + MPICH2
2. CentOS 5.3, Intel Xeon E5520 2.27 GHz, gfortran/g95/pgf90 + MPICH2
3. Red Hat Enterprise 5.3, Intel Xeon X5650 2.67 GHz, gfortran + IntelMPI
4. IBM Power 6 4.7 GHz, xlf + PESSL (IBM parallel library)
Has the code been vectorised or parallelized?: Yes, parallelized using MPI extensions. Number of CPUs used: up to 999.
RAM (per CPU core): 10–20 MB.
The physical memory needed for the simulation depends on the cluster size; the values indicated are typical for small clusters (N ≤ 300–400).
Classification: 23
Catalogue identifier of previous version: AENZ_v1_0
Journal reference of previous version: Comput. Phys. Comm. 184 (2013) 873
Nature of problem: We provide a general parallel code to investigate the structural and thermodynamic properties of multiply-charged clusters.
Solution method: Parallel Monte Carlo methods are implemented for the exploration of the configuration space of multiply-charged clusters. Two parallel Monte Carlo methods were found appropriate to achieve this goal: Parallel Tempering, where replicas of the same cluster at different temperatures are distributed among different CPUs, and Parallel Charging, where replicas (at the same temperature) having different particle charges or numbers of charged particles are distributed among different CPUs.
Restrictions: The current version of the code uses Lennard-Jones interactions as the main cohesive interaction between spherical particles, and electrostatic interactions (charge–charge, charge-induced dipole, induced dipole–induced dipole, polarisation). The Monte Carlo simulations can only be performed in the NVT ensemble in the present code.
Unusual features: The Parallel Charging method, based on the same philosophy as Parallel Tempering but with particle charges and numbers of charged particles as parameters instead of temperature, is an interesting new approach to explore energy landscapes. Splitting of the simulations is allowed, and averages are updated accordingly.
Running time: The running time depends on the number of Monte Carlo steps, the cluster size, and the type of interactions selected (e.g., polarisation turned on or off, and the method used for calculating the induced dipoles).
Typically, a complete simulation can last from a few tens of minutes to a few hours for small clusters (N ≤ 100, not including polarisation interactions), one week for large clusters (N ≥ 1000, not including polarisation interactions), and several weeks for large clusters (N ≥ 1000) when polarisation interactions are included. A restart procedure has been implemented that enables a splitting of the accumulation phase of the simulation.

Reasons for new version: The new version corrects some bugs identified in the previous version. It also provides the user with some new functionalities, such as the separation of histograms for positively and negatively charged particles, a new scheme to perform parallel Monte Carlo simulations, and a new random number generator.

Summary of revisions

1. Additional features of MCMC2 version 1.1

(a) Histograms for positively and negatively charged particles. The first version of MCMC2 was able to produce a large variety of histograms to investigate the structure of charged clusters and their propensity to undergo evaporation: angular histograms for charged particles ("angdist-yyy.dat" and "surfang-yyy.dat" files), radial histograms ("rhist-x-yyy.dat" files), and histograms for tracking the number of charged surface particles ("surfnb-yyy.dat" files) or the number of particles that tend to evaporate ("evapnb-yyy.dat" files). Although the program could handle clusters composed of both positively and negatively charged particles, these histograms did not separate the two classes of particles. This has been corrected by the addition of two columns in the histogram files. These columns are labelled with the signs "+" and "−", which refer to positively and negatively charged particles, respectively. The study of clusters composed of both positively and negatively charged particles should therefore be made easier [2].
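The charge-sign separation described above can be illustrated with a short sketch. The following is illustrative Python, not the Fortran code distributed with MCMC2; the function name and dictionary layout are ours, and only the idea of accumulating separate "+" and "−" columns alongside the total is taken from the text.

```python
import math

def radial_histograms(positions, charges, com, r_max, dr):
    """Accumulate one-particle radial histograms with respect to the
    cluster center of mass 'com', with separate '+' and '-' columns
    for positively and negatively charged particles (illustrative
    only; MCMC2 writes such columns to the rhist-x-yyy.dat files)."""
    n_bins = int(r_max / dr)
    hist = {"all": [0] * n_bins, "+": [0] * n_bins, "-": [0] * n_bins}
    for (x, y, z), q in zip(positions, charges):
        r = math.sqrt((x - com[0])**2 + (y - com[1])**2 + (z - com[2])**2)
        b = int(r / dr)
        if b >= n_bins:
            continue  # beyond the grid size (container radius + deltagrid)
        hist["all"][b] += 1  # every particle enters the total column
        if q > 0:
            hist["+"][b] += 1
        elif q < 0:
            hist["-"][b] += 1
    return hist
```

Neutral particles contribute only to the total column, so the "+" and "−" columns sum to the charged-particle count in each bin.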
Most of the keywords corresponding to histograms have not been changed, except the keyword for radial histograms, which now includes the possibility of not printing any histogram (see keyword "RADIAL" in the "Modified keywords" section).

(b) Full replica pair exchange in Monte Carlo simulations. In previous publications [2, 3] the exchange between replica configurations was based on two steps: the random selection of one replica pair, and the calculation of the Parallel Tempering or Parallel Charging criterion to accept or reject the exchange of configurations between replicas. By default these exchanges were attempted every ns = 10 Monte Carlo sweeps, where a sweep is composed of N Monte Carlo steps for an N-particle cluster. The main goal of parallel Monte Carlo algorithms is to improve the convergence speed, and we might thus be interested in attempting more than one exchange of replica configurations every ns sweeps, provided that ns is large enough for the final configuration (after the ns sweeps) to be decorrelated from the initial configuration (before the ns sweeps). In the present version of the code we have implemented a second way to perform parallel Monte Carlo simulations, in which exchanges of configurations are attempted for half of the replica pairs every ns sweeps, by alternately selecting odd pairs (i.e., pairs involving replicas 1–2, 3–4, etc.) and even pairs (i.e., pairs involving replicas 2–3, 4–5, etc.). The two methods should be equivalent for large numbers of sweeps, although the second method proposed in the present version is expected to converge faster than the method based on random choices of replica pairs. Modifications brought to the keywords "PC" and "PT" are reported in the "Modified keywords" section.

(c) Lagged Fibonacci random number generator. Random numbers were generated by means of a LAPACK random number generator in the first version of MCMC2 [1].
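The odd/even pair alternation described in point (b) can be sketched as follows. This is a minimal Python illustration, not the Fortran/MPI code distributed with MCMC2: the function names are ours, replica indices are 0-based, and we assume the standard Parallel Tempering acceptance rule p = min(1, exp((βi − βj)(Ei − Ej))); the Parallel Charging criterion would replace this rule but use the same pair schedule.

```python
import math
import random

def pair_list(n_rep, phase):
    """Replica pairs for one exchange phase: phase 0 gives the 'odd'
    pairs (replicas 0-1, 2-3, ...), phase 1 the 'even' pairs
    (replicas 1-2, 3-4, ...).  Roughly half of the adjacent pairs
    are returned in each phase."""
    return [(i, i + 1) for i in range(phase, n_rep - 1, 2)]

def attempt_exchanges(energies, betas, phase, rng=random.random):
    """Attempt a configuration swap for every pair of the current
    phase, using the standard Parallel Tempering acceptance rule
    p = min(1, exp((beta_i - beta_j) * (E_i - E_j))).  Swapping the
    energies here stands in for swapping full configurations."""
    swapped = []
    for i, j in pair_list(len(energies), phase):
        delta = (betas[i] - betas[j]) * (energies[i] - energies[j])
        if delta >= 0 or rng() < math.exp(delta):
            energies[i], energies[j] = energies[j], energies[i]
            swapped.append((i, j))
    return swapped
```

Alternating the `phase` argument every ns sweeps lets a configuration diffuse across the whole temperature ladder, whereas the random-pair scheme attempts only one swap per exchange step.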
Knowing that the main goal of MCMC2 is to handle Monte Carlo simulations of large clusters (from hundreds up to at least thousands of particles), we were particularly concerned with delivering a random number generator already thoroughly tested on neutral or charged clusters of such sizes. One of us (ML) developed a lagged Fibonacci random number generator based on the works of Kirkpatrick et al. [4] and Bhanot et al. [5]. In particular, this random number generator was successfully used in accurate diffusion quantum Monte Carlo investigations of the structure and energetics of small helium clusters [6], in a benchmark convergence study of the dissociation energy of the HF dimer [7], and in studies of the fragmentation dynamics of ionized doped helium clusters [8, 9]. Modifications brought to the keyword "SEED" are reported in the "Modified keywords" section. As an example, heat capacity curves of neutral A100 clusters obtained after performing Parallel Tempering simulations with the two random number generators used in MCMC2 are plotted in Fig. 1 of the Supplementary materials. We can notice that the melting peak is well defined by all four methods; some small deviations may occur for the premelting peak, whose convergence is harder to achieve.

2. Modifications or corrections to MCMC2 version 1.0

(a) In MCMC2 (version 1.0) we computed the heat capacity related to Coulomb energy fluctuations, which we improperly called the "Coulomb part of the heat capacity".
Indeed, when defining the potential energy of a charged cluster as the sum $U = V + V_c$ of the Lennard-Jones (LJ) interactions $V$ and the Coulomb interactions $V_c$, we can define several quantities:

Total heat capacity: $C_V = C_V^{\mathrm{Coul}} + C_V^{\mathrm{LJ}} = \frac{1}{k_B T^2}\left(\langle U^2\rangle - \langle U\rangle^2\right)$

Coulomb contribution to $C_V$: $C_V^{\mathrm{Coul}} = \frac{1}{k_B T^2}\left(\langle U V_c\rangle - \langle U\rangle\langle V_c\rangle\right)$

LJ contribution to $C_V$: $C_V^{\mathrm{LJ}} = \frac{1}{k_B T^2}\left(\langle U V\rangle - \langle U\rangle\langle V\rangle\right)$

Coulomb heat capacity: $C_{V,\mathrm{fluct}}^{\mathrm{Coul}} = \frac{1}{k_B T^2}\left(\langle V_c^2\rangle - \langle V_c\rangle^2\right)$

LJ heat capacity: $C_{V,\mathrm{fluct}}^{\mathrm{LJ}} = \frac{1}{k_B T^2}\left(\langle V^2\rangle - \langle V\rangle^2\right)$

where we call "Coulomb heat capacity" and "LJ heat capacity" the heat capacities obtained by calculating the fluctuations of the Coulomb and LJ interactions, respectively. After some calculations, we find two formulas expressing $C_V$ as a function of $C_V^{\mathrm{Coul}}$, $C_V^{\mathrm{LJ}}$, $C_{V,\mathrm{fluct}}^{\mathrm{Coul}}$ and $C_{V,\mathrm{fluct}}^{\mathrm{LJ}}$:

$C_V = 2C_V^{\mathrm{Coul}} - C_{V,\mathrm{fluct}}^{\mathrm{Coul}} + C_{V,\mathrm{fluct}}^{\mathrm{LJ}}$

$C_V = 2C_V^{\mathrm{LJ}} + C_{V,\mathrm{fluct}}^{\mathrm{Coul}} - C_{V,\mathrm{fluct}}^{\mathrm{LJ}}$

which leads to

(1) $C_{V,\mathrm{fluct}}^{\mathrm{Coul}} - C_{V,\mathrm{fluct}}^{\mathrm{LJ}} = C_V^{\mathrm{Coul}} - C_V^{\mathrm{LJ}}$.

The difference between the Coulomb and LJ heat capacities thus equals the difference between the Coulomb and LJ contributions to the heat capacity. However, only the latter quantities have a clear physical meaning for charged clusters bound by both LJ and Coulomb interactions, and we have therefore replaced the calculation of $C_{V,\mathrm{fluct}}^{\mathrm{Coul}}$ by $C_V^{\mathrm{Coul}}$ in the present version of MCMC2. Note that the contribution of the polarisation energy to the heat capacity has also been added when polarisation is included.

(b) Several minor corrections were brought into version 1.1:

i. The word "RMS" (which usually stands for root mean square) was wrongly written several times in the MCMC2-yyy.out output files instead of "standard deviation" (abbreviated "std dev."). This is corrected in the present version of the code.

ii. The syntax used to generate some file names seemed not to be recognized by recent pgf95 compilers and has been modified.
This does not affect the user, since file names have not been modified.

iii. The default minimum temperature for the geometric temperature scale is set to 10⁻¹⁰ instead of 0, which would have led to a divergence if not changed by the user.

iv. Test cases are slightly modified to take into account the new features of version 1.1, and the README.txt files are more detailed. The START folders and the obsolete OpenPBS scripts are removed from the distribution.

3. Modified keywords

We present in this section a list of the modified keywords.

• PC METHOD imcpc REPLICAS N_repc EVERY pc_every TEMPERATURE kt0 QREF q_ref: Setting of parameters for running MCPC simulations. imcpc is an integer dedicated to the choice of the parallel scheme to be used when performing Parallel Charging simulations (0 = random choice of one replica pair with MCPC2, 1 = random choice of one replica pair with MCPC1, 2 = use of even or odd replica pairs with MCPC2, and 3 = use of even or odd replica pairs with MCPC1). N_repc is the number of replicas. This number must equal the number of CPUs used for the simulation: the MCMC2 code assumes that one replica is run on one CPU. Any different choice will lead to job abortion. Configuration swapping between replicas is tested every pc_every MC sweeps. kt0 is the reference temperature of the simulation (identical for all the replicas). An MCPC simulation will always use kt0, whatever the originally selected temperature scale (constant, geometric, or adjusted; see keyword "TEMPERATURE"). Defining a temperature scale in the setup file will only result in the replacement of these temperatures by kt0 (beware: if the user does not give a value to kt0, the default will be used). q_ref is the reference charge used for MCPC1 simulations. For MCPC2 simulations, the charges q are read from the input configuration files. The simulation is stopped if the input files have charges different from q_ref.
For deactivating MCPC simulations, choose pc_every = 0.
Default: imcpc=0, N_repc=0, pc_every=0, kt0=0.1, q_ref=0.

• PT METHOD imcpt REPLICAS N_rept EVERY pt_every: Setting of parameters for running MCPT simulations. imcpt is an integer dedicated to the choice of the parallel scheme to be used when performing Parallel Tempering simulations (0 = random choice of one replica pair and 1 = alternating use of even and odd replica pairs). N_rept is the number of replicas. This number must equal the number of CPUs used for the simulation: the MCMC2 code assumes that one replica is run on one CPU. Any different choice will lead to job abortion. Configuration swapping between replicas is tested every pt_every MC sweeps. For deactivating MCPT simulations, choose pt_every = 0.
Default: imcpt=0, N_rept=0, pt_every=0.

• RADIAL USE lrad GRIDRCOM deltagrid stepgrid PARTICLE radtyp: Setting of parameters for plotting one-particle radial histograms (when lrad = .true.). The radial histograms cannot extend beyond the radius of the Monte Carlo container (see keyword "CONTAINER"); for graphical purposes, however, we allow the user to add a small distance deltagrid to the grid size (= radius + deltagrid). stepgrid is the grid step for one-particle radial histograms with respect to the cluster center of mass. The grid origin is hardcoded to zero, since these histograms are calculated with respect to the cluster center of mass. The number Ngrid of grid points is automatically determined in the code from the grid size and the grid step. radtyp enables the user to specify the type of particles to be considered for plotting radial histograms (0 = no distribution, 1 = all the particles without any distinction, 2 = charged particles only, 3 = neutral particles only, 4 = all the previous histograms (4 = 1 + 2 + 3)).
Default: lrad=.false., deltagrid=0, stepgrid=0.1, radtyp=0.

• SEED METHOD irand INITIALIZATION seed(1) seed(2) seed(3) seed(4) SCALING nsc(1) nsc(2) nsc(3) nsc(4): Choice of the random number generator.
irand is an integer that enables the user to select a random number generator (0 = LAPACK random number generator and 1 = lagged Fibonacci random number generator). Depending on the value of irand, the random seeds and scaling factors may be different. If irand = 0, the seed(j) (j ∈ {1,2,3,4}) are positive integer seeds that must be below 4095, and seed(4) must be odd. N_rep (number of replicas) secondary seeds are generated from these primary seeds and the scaling factors nsc(j) (j ∈ {1,2,3,4}). By default, these secondary seeds are obtained as seedi(j) = seed(j) + i × nsc(j) for j ∈ {1,2,3} and seedi(4) = seed(4) + 2i × nsc(4) (since all the seeds seedi(4) must remain odd), where i is the replica number. If irand = 1, only one seed (namely seed(1)) is expected by the program, and the secondary seeds are likewise produced as seedi(1) = seed(1) + i × nsc(1).
Default: seed(j)=0 for j ∈ {1,2,3}, seed(4)=1, nsc(j)=1 for j ∈ {1,2,3,4} (i.e., seedi(j) = i for j ∈ {1,2,3} and seedi(4) = 1 + 2i).
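The secondary-seed rules quoted above can be sketched as follows. This is an illustrative Python transcription of the formulas given for the SEED keyword, not the Fortran code itself; the function name is ours, and `seed`/`nsc` are passed as 0-based lists holding seed(1)..seed(4) and nsc(1)..nsc(4).

```python
def secondary_seeds(seed, nsc, n_rep, irand):
    """Per-replica secondary seeds, following the rules of the SEED
    keyword: seed_i(j) = seed(j) + i*nsc(j) for j = 1..3 and
    seed_i(4) = seed(4) + 2*i*nsc(4) (which keeps seed_i(4) odd),
    for replica numbers i = 1..n_rep.  For irand = 1 (lagged
    Fibonacci) only seed(1) is used."""
    seeds = []
    for i in range(1, n_rep + 1):
        if irand == 0:  # LAPACK generator: four seeds per replica
            s = [seed[j] + i * nsc[j] for j in range(3)]
            s.append(seed[3] + 2 * i * nsc[3])
            seeds.append(s)
        else:           # lagged Fibonacci: a single seed per replica
            seeds.append([seed[0] + i * nsc[0]])
    return seeds
```

With the default values (seed = [0, 0, 0, 1], nsc = [1, 1, 1, 1]) this reproduces the seeds stated in the text: seedi(j) = i for j ∈ {1,2,3} and seedi(4) = 1 + 2i.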